Correction to "Optimal random perturbations for stochastic approximation using a simultaneous perturbation gradient approximation"
Authors
Abstract
The simultaneous perturbation stochastic approximation (SPSA) algorithm has recently attracted considerable attention for optimization problems where it is difficult or impossible to obtain a direct gradient of the objective (say, loss) function. The approach is based on a highly efficient simultaneous perturbation approximation to the gradient based on loss function measurements. SPSA is based on picking a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo process. The objective is to minimize the mean square error of the estimate. We also consider maximization of the likelihood that the estimate be confined within a bounded symmetric region of the true parameter. The optimal distribution for the components of the simultaneous perturbation vector is found to be a symmetric Bernoulli in both cases. We end the paper with a numerical study related to the area of experiment design.
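As a minimal illustration of the gradient approximation described above, the sketch below forms one SPSA gradient estimate with symmetric Bernoulli ±1 perturbations. The quadratic test loss, the gain value `c_k`, and the dimension are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def spsa_gradient(loss, theta, c_k, rng):
    """One simultaneous perturbation gradient estimate.

    Every component of theta is perturbed at once with a random vector
    whose components are symmetric Bernoulli (+1/-1) -- the distribution
    found to be optimal in the paper -- so only two loss measurements
    are needed regardless of the dimension of theta.
    """
    p = len(theta)
    delta = rng.choice([-1.0, 1.0], size=p)          # symmetric Bernoulli perturbation
    y_plus = loss(theta + c_k * delta)               # measurement at theta + c_k*delta
    y_minus = loss(theta - c_k * delta)              # measurement at theta - c_k*delta
    return (y_plus - y_minus) / (2.0 * c_k * delta)  # componentwise gradient estimate

# Illustrative use on an assumed quadratic loss (not from the paper).
rng = np.random.default_rng(0)
theta = np.array([1.0, -2.0, 0.5])
g_hat = spsa_gradient(lambda t: float(np.sum(t**2)), theta, c_k=0.1, rng=rng)
print(g_hat)  # noisy estimate of the true gradient 2*theta
```

Only two loss measurements are used per estimate, whatever the dimension of theta, which is the efficiency property the abstract refers to.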
Similar Resources
Deterministic Perturbations For Simultaneous Perturbation Methods Using Circulant Matrices
We consider the problem of finding optimal parameters under a simulation optimization setup. For a p-dimensional parameter optimization, the classical Kiefer-Wolfowitz Finite Difference Stochastic Approximation (FDSA) scheme uses p+1 or 2p simulations of the system feedback for one-sided and two-sided gradient estimates, respectively. The dependence on the dimension p makes FDSA impractical for high...
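To make the dimension dependence mentioned above concrete, here is a minimal sketch of the classical two-sided FDSA gradient estimate, which perturbs one coordinate at a time and therefore consumes 2p loss measurements per iteration; the quadratic loss and step size are illustrative assumptions.

```python
import numpy as np

def fdsa_gradient(loss, theta, c_k):
    """Two-sided finite-difference gradient estimate (Kiefer-Wolfowitz FDSA).

    Each coordinate is perturbed separately, so 2*p loss measurements are
    required per estimate -- the dimension dependence that SPSA avoids.
    """
    p = len(theta)
    g_hat = np.zeros(p)
    for i in range(p):
        e_i = np.zeros(p)
        e_i[i] = 1.0                                   # unit vector along coordinate i
        y_plus = loss(theta + c_k * e_i)
        y_minus = loss(theta - c_k * e_i)
        g_hat[i] = (y_plus - y_minus) / (2.0 * c_k)
    return g_hat

# Illustrative comparison of measurement counts on an assumed quadratic loss.
theta = np.ones(10)
g = fdsa_gradient(lambda t: float(np.sum(t**2)), theta, c_k=0.1)
print(g)  # this single estimate used 2*p = 20 loss evaluations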
On an Efficient Distribution of Perturbations for Simulation Optimization using Simultaneous Perturbation Stochastic Approximation
Stochastic approximation as a method of simulation optimization is well-studied and numerous practical applications exist. One approach, simultaneous perturbation stochastic approximation (SPSA), has proven to be an efficient algorithm for such purposes. SPSA uses a centered difference approximation to the gradient based on two function evaluations regardless of the dimension of the problem. It...
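As a side note on why the choice of perturbation distribution matters (a standard condition from Spall's SPSA analysis, not stated in the snippet above): each perturbation component must have a finite inverse moment, which symmetric Bernoulli components satisfy trivially while mean-zero uniform or normal components do not. A small Monte Carlo illustration, with a purely illustrative sample size:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

bernoulli = rng.choice([-1.0, 1.0], size=n)   # valid: |1/delta| = 1 always
uniform = rng.uniform(-1.0, 1.0, size=n)      # problematic: density positive at 0
normal = rng.standard_normal(n)               # problematic for the same reason

for name, d in [("bernoulli", bernoulli), ("uniform", uniform), ("normal", normal)]:
    # Sample mean of |1/delta|: it is exactly 1 in the Bernoulli case, while
    # for the uniform and normal samples it is large and unstable because the
    # inverse moment does not exist.
    print(name, np.mean(np.abs(1.0 / d)))
```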
Accelerated randomized stochastic optimization
We propose a general class of randomized gradient estimates to be employed in the recursive search of the minimum of an unknown multivariate regression function. Here only two observations per iteration step are used. As special cases it includes random direction stochastic approximation (Kushner and Clark), simultaneous perturbation stochastic approximation (Spall) and a special kernel based s...
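For context on one of the special cases named above, here is a minimal sketch of a random-direction gradient estimate that also uses only two observations per step; the choice of a uniformly random unit direction and the quadratic test loss are illustrative assumptions.

```python
import numpy as np

def rdsa_gradient(loss, theta, c_k, rng):
    """Random-direction gradient estimate using two loss observations.

    The parameter is perturbed along a single random unit direction d; the
    scaled difference quotient times d serves as the gradient estimate.
    """
    p = len(theta)
    d = rng.standard_normal(p)
    d /= np.linalg.norm(d)                  # random unit direction
    y_plus = loss(theta + c_k * d)
    y_minus = loss(theta - c_k * d)
    # Scaling by p compensates for E[d d^T] = I/p, keeping the estimate
    # unbiased to first order.
    return p * (y_plus - y_minus) / (2.0 * c_k) * d

rng = np.random.default_rng(2)
theta = np.array([0.5, -1.0])
print(rdsa_gradient(lambda t: float(np.sum(t**2)), theta, c_k=0.05, rng=rng))
```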
Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions
We present a stochastic approximation algorithm based on the penalty function method and a simultaneous perturbation gradient estimate for solving stochastic optimization problems with general inequality constraints. We present a general convergence result that applies to a class of penalty functions including the quadratic penalty function, the augmented Lagrangian, and the absolute penalty functi...
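To sketch the idea described above, the example below combines an SPSA-style gradient estimate with a quadratic penalty for an inequality constraint. The specific constraint, the penalty-weight schedule, and the gain sequences are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def quadratic_penalty(theta, constraints):
    # Quadratic penalty for inequality constraints g_j(theta) <= 0.
    return sum(max(0.0, g(theta)) ** 2 for g in constraints)

def penalized_spsa(loss, constraints, theta0, iters=1000, seed=3):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(iters):
        a_k = 0.1 / (k + 1) ** 0.602        # illustrative gain sequences
        c_k = 0.1 / (k + 1) ** 0.101
        r_k = 1.0 + 0.1 * k                 # illustrative, slowly growing penalty weight
        penalized = lambda t: loss(t) + r_k * quadratic_penalty(t, constraints)
        delta = rng.choice([-1.0, 1.0], size=theta.size)
        g_hat = (penalized(theta + c_k * delta)
                 - penalized(theta - c_k * delta)) / (2.0 * c_k * delta)
        theta = theta - a_k * g_hat
    return theta

# Minimize ||theta||^2 subject to theta[0] >= 1, i.e. g(theta) = 1 - theta[0] <= 0.
theta_star = penalized_spsa(lambda t: float(np.sum(t ** 2)),
                            [lambda t: 1.0 - t[0]],
                            theta0=[2.0, 2.0])
print(theta_star)  # moves toward the constrained solution near [1, 0]
```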
Stochastic optimisation with inequality constraints using simultaneous perturbations and penalty functions
We present a stochastic approximation algorithm based on the penalty function method and a simultaneous perturbation gradient estimate for solving stochastic optimisation problems with general inequality constraints. We present a general convergence result that applies to a class of penalty functions including the quadratic penalty function, the augmented Lagrangian, and the absolute penalty functi...
Journal: IEEE Trans. Automat. Contr.
Volume 44, Issue -
Pages -
Publication date: 1999